Conversation

@rosenrodt (Collaborator) commented Oct 6, 2025

@coderabbitai summary

Description

  • More performant kernels. mxfp8 x mxfp4 sees the largest boost from the additional kernel configs. Other precisions may observe a slight perf increase from optimized TMA loads/stores.
  • The autotuner now chooses from multiple runner instances of varying tileN sizes (see the sketch after this list); previously it chose a single heuristically determined tileN. WARNING: this increases autotuning time.
  • The autotuner must tune with non-zero-valued tensors. randint(-5, 5) appears to report benchmark results more accurately than randn().
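A minimal sketch of the tileN enumeration described above. generate_power_of_2_between and FP4BlockScaleMoERunner are names that appear in this PR's diff, but the stand-in class and constructor arguments below are simplified placeholders, not the actual signatures:

```python
from dataclasses import dataclass

def generate_power_of_2_between(start: int, end: int):
    """Yield powers of two in [start, end]: 8, 16, 32, 64, 128 for (8, 128)."""
    value = start
    while value <= end:
        yield value
        value *= 2

@dataclass
class FP4BlockScaleMoERunner:  # placeholder stand-in; the real ctor args differ
    tile_tokens_dim: int

# One runner instance per candidate tileN; the autotuner benchmarks all of
# them, hence the longer autotuning time noted above.
kernel_runners = [
    FP4BlockScaleMoERunner(tile_tokens_dim=t)
    for t in generate_power_of_2_between(start=8, end=128)
]
print([r.tile_tokens_dim for r in kernel_runners])  # [8, 16, 32, 64, 128]
```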

GPT-OSS-120b TP1

| Concurrency | Baseline TPS/user | Updated cubin TPS/user | Gain over baseline | Updated cubin + autotuned tileN with randint() TPS/user | Gain over baseline |
|------------:|------------------:|-----------------------:|-------------------:|--------------------------------------------------------:|-------------------:|
| 1    | 404.158  | 406.8676 | 1.01 | 397.50  | 0.98 |
| 4    | 291.7377 | 283.3793 | 0.97 | 292.608 | 1.00 |
| 8    | 216.3125 | 217.2664 | 1.00 | 232.50  | 1.07 |
| 16   | 176.0417 | 177.4137 | 1.01 | 183.84  | 1.04 |
| 32   | 133.4404 | 134.2653 | 1.01 | 140.10  | 1.05 |
| 64   | 100.7758 | 101.5539 | 1.01 | 110.70  | 1.10 |
| 128  | 71.9943  | 72.9801  | 1.01 | 81.49   | 1.13 |
| 256  | 51.6263  | 56.8874  | 1.10 | 55.43   | 1.07 |
| 512  | 33.9069  | 36.7009  | 1.08 | 35.99   | 1.06 |
| 1024 | 18.7652  | 20.2948  | 1.08 | 19.80   | 1.06 |

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR follows the TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is given. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensures that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only supports [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

@rosenrodt changed the title from [None][perf] Update TRTLLM MoE MxFP4 cubins to [None][feat] Update TRTLLM MoE MxFP4 cubins on Oct 6, 2025
@rosenrodt (Collaborator Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #20675 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator)

PR_Github #20675 [ run ] completed with state FAILURE
/LLM/main/L0_MergeRequest_PR pipeline #15619 completed with status: 'FAILURE'

@rosenrodt force-pushed the update-trtllm-moe-cubins branch from cfba343 to af27ef9 on October 6, 2025 16:52
@rosenrodt (Collaborator Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #20681 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator)

PR_Github #20681 [ run ] completed with state FAILURE
/LLM/main/L0_MergeRequest_PR pipeline #15624 completed with status: 'FAILURE'

@rosenrodt force-pushed the update-trtllm-moe-cubins branch from af27ef9 to 6fd1909 on October 7, 2025 01:19
@rosenrodt (Collaborator Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #20694 [ run ] triggered by Bot

@rosenrodt (Collaborator Author)

/bot kill

@tensorrt-cicd (Collaborator)

PR_Github #20735 [ kill ] triggered by Bot

@tensorrt-cicd (Collaborator)

PR_Github #20694 [ run ] completed with state ABORTED
LLM/main/L0_MergeRequest_PR #15632 (Blue Ocean) completed with status: ABORTED

@tensorrt-cicd (Collaborator)

PR_Github #20735 [ kill ] completed with state SUCCESS
Successfully killed previous jobs for commit 6fd1909

@rosenrodt force-pushed the update-trtllm-moe-cubins branch from 6fd1909 to e94659b on October 7, 2025 15:28
@rosenrodt requested review from a team as code owners on October 7, 2025 15:28
@rosenrodt requested review from liji-nv and yuxianq on October 7, 2025 15:28
@rosenrodt force-pushed the update-trtllm-moe-cubins branch 2 times, most recently from fa1783c to bd830ef on October 7, 2025 16:58
@rosenrodt (Collaborator Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #20740 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator)

PR_Github #20740 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #15672 completed with status: 'FAILURE'

@rosenrodt force-pushed the update-trtllm-moe-cubins branch from bd830ef to 729d3b5 on October 8, 2025 03:47
@rosenrodt (Collaborator Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #20762 [ run ] triggered by Bot

@rosenrodt (Collaborator Author)

/bot kill

@tensorrt-cicd (Collaborator)

PR_Github #21349 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #16116 completed with status: 'SUCCESS'

@@ -918,7 +918,8 @@ def _create_tensor_like(self, origin_tensor: torch.Tensor,
# TODO: FIXME, sometimes the content of the tensor can affect the performance, like MOE
# One solution is to manipulate the tensor content to make it more like the real data
# during the tuning process. This can be controlled in the preparation phase by the runner.
return torch.zeros(shapes, dtype=dtype, device=device)
# It must not use all zero tensors. Otherwise the timing results become unreliable.
return torch.randint(-5, 5, shapes, device=device).to(dtype)
Collaborator

I suggest using a customizable pre-hook for creating the dummy tensors. Here is the PR that adds this feature:
#6924
We can replace this change with the pre-hook later.

Collaborator Author

The pre-hook in PR #6924 is useful, and we should use it once it is merged. I see a need to initialize data differently for different precisions due to their dynamic-range differences.

Collaborator Author

Adding @nekorobov for visibility. When #6924 is merged, we may want to adjust how dummy tensors are initialized during autotuning for different precisions, in order to get an ideal mix of values; a sketch follows below.
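For illustration, a minimal sketch of such precision-aware initialization. The dtype cases and integer ranges here are assumptions for the sake of the example, not tuned values:

```python
import torch

def create_dummy_tensor(shapes, dtype, device):
    """Hypothetical per-precision init for autotuning inputs.

    Narrow integer ranges keep low-precision formats from saturating while
    still avoiding the all-zero case that makes timings unreliable.
    """
    if dtype in (torch.float8_e4m3fn, torch.float8_e5m2):
        # Assumed tighter range for fp8; the right mix of values is an open
        # question, per the discussion above. Cast via float32 because some
        # dtypes may not cast directly from int64.
        return torch.randint(-2, 3, shapes, device=device).float().to(dtype)
    return torch.randint(-5, 5, shapes, device=device).float().to(dtype)
```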

Collaborator

#6924 has been merged. Because using random data instead of all-zero tensors can be considered an improvement for perf stabilization in general, I think we can also keep the code change here. Thanks a lot for the suggestion~

do_finalize,
)
kernel_runners: List[TunableRunner] = []
for tile_tokens_dim_ in list(generate_power_of_2_between(start=8, end=128)):
Collaborator

For this part, I want to confirm:

Currently, for this series of MoE runners, tile_tokens_dim is derived from the value of num_tokens through the calculate_tile_tokens_dim method. Thus, when tuning num_tokens, we should determine the tile_tokens_dim only after a specific num_tokens is given.

Now we tune each num_tokens with all the possible tile_tokens_dim values. But this only holds when the two values are independent of each other. Has it been decoupled already?

However, I still see calculate_tile_tokens_dim used in the fake registration part and some other places, which means tile_tokens_dim is still a function of num_tokens. Did I miss anything here?

Collaborator Author

But this only stands when the two values are independent of each other. Has it been decoupled already?

But I still see calculate_tile_tokens_dim is used in the fake registration part and some other places, which means tile_tokens_dim is still a function of num_tokens. Did I miss anything here?

The ideal tile_tokens_dim correlates with num_tokens and outside factors such as the expert distribution. We don't have good heuristics yet. Ideally, we could predict a range of tile_tokens_dim from num_tokens to reduce the tuning space.

Is my understanding correct that register_fake() must return correct output shapes for torch.compile to work? If so, I see a possible issue: the output shape max_num_padded_tokens depends on tile_tokens_dim in the current implementation, but we cannot determine tile_tokens_dim by heuristics because it's tuned by the autotuner. Is there any side effect to nailing down tile_tokens_dim=128 in register_fake()?

@hyukn (Collaborator) commented Oct 15, 2025

Is my understanding correct that register_fake() must return correct output shapes for torch.compile to work?

I believe so. The fake part does the shape inference, so it is expected to produce exactly the same shape as the custom op implementation.

A possible solution might be to always use the max tile value (like 128) to pad the tensor, which would be easier for the fake part (see the sketch below). Not sure whether it would be suboptimal or not.

And I wonder how the following op uses this padded tensor, as its shape has changed. Is there any extra information delivered?
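To make the max-tile suggestion concrete, a sketch of the padding arithmetic under the assumption that the fake registration always pads with the maximum tile; the real formula for max_num_padded_tokens may involve additional factors:

```python
MAX_TILE_TOKENS_DIM = 128  # assumed upper bound of this PR's tuning range

def fake_max_num_padded_tokens(num_tokens: int) -> int:
    """Shape inference only: round num_tokens up to a multiple of the max
    tile, giving a deterministic upper-bound shape that stays valid no
    matter which tileN the autotuner eventually selects."""
    tile = MAX_TILE_TOKENS_DIM
    return (num_tokens + tile - 1) // tile * tile

print(fake_max_num_padded_tokens(300))  # 384
```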

Collaborator Author

When the autotuner is enabled, the true tile_tokens_dim can be obtained from the returned tuple (tile_tokens_dim, tactic) of choose_one() (which I just added today in de91deb). When the autotuner is disabled, the tileN can be estimated using the same heuristic. I haven't implemented either yet. Does this only affect torch.compile? If so, I'd like to handle it in a follow-up PR.

Collaborator

I think only torch.compile will be affected.

kernel_runners: List[TunableRunner] = []
for tile_tokens_dim_ in list(generate_power_of_2_between(start=8, end=128)):
kernel_runners += [
FP4BlockScaleMoERunner(
@hyukn (Collaborator) commented Oct 15, 2025

If I understand correctly, creating multiple runners is for tuning the tileN, and tile_tokens_dim can be independent of num_tokens (this is important). In that case, I suggest using tactics to represent them instead of different runners, because we prefer different runners to represent different backends or implementations, and different tactics within a single runner to represent different kernel configs.

A proper modification could be (see the sketch after this list):

  • `Encode` it into different tactics and prepare all the CPP runner instances in a single Python runner.
  • In `get_valid_tactics`, create a product of the tactic list and the possible tileN values. Return a tuple (tile_tokens_dim, tactic) instead of a standalone tactic value to indicate the corresponding runner used for that tile value.
  • In `forward`, invoke the correct runner according to the tile_tokens_dim passed within the tactic tuple.
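A minimal sketch of this tactic-tuple scheme. The backend methods get_valid_configs and run are hypothetical stand-ins, not the PR's actual API:

```python
class TileTuningMoERunner:
    """Single Python runner wrapping one CPP runner per tileN value."""

    def __init__(self, cpp_runners: dict):
        # cpp_runners maps tile_tokens_dim -> backend runner instance
        self.cpp_runners = cpp_runners

    def get_valid_tactics(self, inputs):
        # Cross product of tileN values and each backend's own configs;
        # each tactic is a (tile_tokens_dim, config) tuple.
        return [
            (tile, config)
            for tile, runner in self.cpp_runners.items()
            for config in runner.get_valid_configs(inputs)  # hypothetical
        ]

    def forward(self, inputs, tactic):
        tile_tokens_dim, config = tactic
        # Dispatch to the runner that was built for this tileN.
        return self.cpp_runners[tile_tokens_dim].run(inputs, config)
```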

Collaborator Author

Sounds reasonable. Let me revise it.

Collaborator Author

Added in de91deb

@rosenrodt force-pushed the update-trtllm-moe-cubins branch 2 times, most recently from cca9b29 to a9b4191 on October 15, 2025 14:23
@rosenrodt (Collaborator Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #21492 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator)

PR_Github #21492 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #16225 completed with status: 'FAILURE'

{
// returns (tileN, config)
std::vector<std::vector<int64_t>> tactics;
for (auto& [tileN, runner] : mRunners)
Collaborator

Actually, I think we can do it directly in the Python code because it might be simpler. But CPP here is also fine.

Collaborator Author

@nekorobov what's your opinion on this?

Collaborator

I do not have a strong opinion. Whatever is easier and faster in your opinion :) Since the CPP version is already implemented, I am fine with having it this way.

Collaborator Author

I don't have a strong preference either. Let's keep it as-is (CPP) so the Python side stays cleaner.

)
# FIXME: temporarily disable tuning multiple runners due to kernel failure in test:
# python3 -m pytest tests/integration/defs/accuracy/test_llm_api_pytorch.py::TestDeepSeekR1::test_fp8_blockscale[throughput_mtp_trtllm]
tile_tokens_dim = calculate_tile_tokens_dim(hidden_states.shape[0],
Collaborator

Can tactics be shared by these runners with different values of tile_tokens_dim? We calculate this value only according to the input number of tokens, and during the warm-up phase it is set to the maximum number of tokens. The tuning process will then always rely on the runner with that value. Is this expected? Some potential issues might be:

  • Using a runner with a large tile to tune a small input problem size, which might be suboptimal.
  • Using tactics across different tile values. Some runners might not support others' tactics. Not sure if the test failures mentioned in the comment are caused by this.

Collaborator Author

I think the autotuner tunes the "dynamic dimension" of the input tensors from 1, 2, 4, ..., up to some large max_batch size during the warm-up phase, so all shapes should be covered by the autotuner.

Tactics are not interchangeable between runners. So for this MoE precision, we don't tune tileN, due to an issue I met earlier. As a result, the autotuner has no visibility into which runner is best; it only sees the best tactic. calculate_tile_tokens_dim() fills the gap and chooses the right runner, because its result is deterministic (an illustrative sketch follows).
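For context, an illustrative guess at what such a deterministic heuristic could look like. This is not the PR's actual calculate_tile_tokens_dim, and the expert-distribution factors mentioned earlier are ignored:

```python
def calculate_tile_tokens_dim(num_tokens: int, num_experts: int, top_k: int) -> int:
    """Size the tile to the expected tokens per expert, rounded up to a
    power of two and clamped to the supported [8, 128] range."""
    expected_tokens_per_expert = max(1, num_tokens * top_k // num_experts)
    tile = 1 << (expected_tokens_per_expert - 1).bit_length()  # next pow2
    return min(max(tile, 8), 128)

print(calculate_tile_tokens_dim(num_tokens=256, num_experts=128, top_k=4))  # 8
```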

@rosenrodt force-pushed the update-trtllm-moe-cubins branch from a9b4191 to 96ed965 on October 16, 2025 10:43
@rosenrodt force-pushed the update-trtllm-moe-cubins branch from 96ed965 to 3403a62 on October 16, 2025 10:57
@rosenrodt (Collaborator Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #21572 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator)

PR_Github #21572 [ run ] completed with state ABORTED
LLM/main/L0_MergeRequest_PR #16282 (Blue Ocean) completed with status: ABORTED

@rosenrodt (Collaborator Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #21630 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator)

PR_Github #21630 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #16297 completed with status: 'FAILURE'

@rosenrodt requested a review from hyukn on October 17, 2025 04:02
@rosenrodt (Collaborator Author) commented Oct 17, 2025

/bot run

Rerun due to what appears to be a CI issue:

[  FAILED  ] AsymmetricCaseTest0/AsymmetricalCacheTest.TestCase/0, where GetParam() = (1, 1, 1, 1, 1, 1, 4, 4, 4, 16, 4-byte object <00-00 00-00>, 2, false, false, false, false) (659 ms)
C++ exception with description "[TensorRT-LLM][ERROR] Assertion failed: libtensorrt_llm_nixl_wrapper.so can not be loaded correctly: libtensorrt_llm_nixl_wrapper.so: cannot open shared object file: No such file or directory (../tensorrt_llm/executor/cache_transmission/transferAgent.cpp:42)

@tensorrt-cicd (Collaborator)

PR_Github #21657 [ run ] triggered by Bot

@hyukn (Collaborator) left a comment

LGTM. Thanks a lot for the effort~
